Constrained Optimization for a Subset of the Gaussian Parsimonious Clustering Models
The expectation-maximization (EM) algorithm is an iterative method for
finding maximum likelihood estimates when data are incomplete or are treated as
being incomplete. The EM algorithm and its variants are commonly used for
parameter estimation in applications of mixture models for clustering and
classification. This is despite the fact that even the Gaussian mixture model
likelihood surface contains many local maxima and is riddled with singularities.
Previous work has focused on circumventing this problem by constraining the
smallest eigenvalue of the component covariance matrices. In this paper, we
consider constraining the smallest eigenvalue, the largest eigenvalue, and both
the smallest and largest within the family setting. Specifically, a subset of
the GPCM family is considered for model-based clustering, where we use a
re-parameterized version of the well-known eigenvalue decomposition of the
component covariance matrices. Our approach is illustrated through various
experiments with simulated and real data.
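The eigenvalue-constraint idea can be illustrated with a minimal sketch: project a (possibly near-singular) covariance estimate so its eigenvalues fall between a lower and an upper bound. This is not the authors' implementation; the function name `constrain_covariance` and the bounds `lam_min`/`lam_max` are illustrative assumptions.

```python
import numpy as np

def constrain_covariance(S, lam_min=1e-3, lam_max=1e3):
    """Clip the eigenvalues of a symmetric covariance estimate S to
    [lam_min, lam_max] and reconstruct the matrix (illustrative bounds,
    not values from the paper)."""
    vals, vecs = np.linalg.eigh(S)
    vals = np.clip(vals, lam_min, lam_max)
    return vecs @ np.diag(vals) @ vecs.T

# Example: a sample covariance made singular by a collinear column.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
X[:, 2] = X[:, 0]                 # collinearity -> singular covariance
S = np.cov(X, rowvar=False)
S_c = constrain_covariance(S)
print(np.linalg.eigvalsh(S_c))    # all eigenvalues now within the bounds
```

Bounding the smallest eigenvalue keeps the likelihood away from the singularities mentioned above, while bounding the largest prevents a single component from absorbing most of the data.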
Finite mixtures of matrix-variate Poisson-log normal distributions for three-way count data
Three-way data structures, characterized by three entities, the units, the
variables and the occasions, are frequent in biological studies. In RNA
sequencing, three-way data structures are obtained when high-throughput
transcriptome sequencing data are collected for n genes across p conditions at
r occasions. Matrix-variate distributions offer a natural way to model
three-way data and mixtures of matrix-variate distributions can be used to
cluster three-way data. Clustering of gene expression data is carried out as a
means of discovering gene co-expression networks. In this work, a mixture of
matrix-variate Poisson-log normal distributions is proposed for clustering read
counts from RNA sequencing. By considering the matrix-variate structure, full
information on the conditions and occasions of the RNA sequencing dataset is
simultaneously considered, and the number of covariance parameters to be
estimated is reduced. A Markov chain Monte Carlo expectation-maximization
algorithm is used for parameter estimation and information criteria are used
for model selection. The models are applied to both real and simulated data,
giving favourable clustering results.
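The covariance-parameter reduction from the matrix-variate structure can be sketched as follows: if the latent log-rates have a row covariance over the p conditions and a column covariance over the r occasions (a Kronecker-structured covariance for the vectorized data), the number of free covariance parameters drops from pr(pr+1)/2 to roughly p(p+1)/2 + r(r+1)/2. The simulator below is a hedged sketch of this generative structure, not the authors' code; `sample_mvpln` and its arguments are hypothetical names.

```python
import numpy as np

def sample_mvpln(n, M, Sigma_p, Psi_r, rng=None):
    """Simulate n matrix-variate Poisson-log normal observations:
    Y[i] has Poisson entries with rates exp(Theta), where Theta is a
    matrix-variate normal draw with mean M, row covariance Sigma_p
    (p x p, over conditions) and column covariance Psi_r (r x r,
    over occasions)."""
    rng = rng or np.random.default_rng(1)
    p, r = M.shape
    Lp = np.linalg.cholesky(Sigma_p)   # row-covariance factor
    Lr = np.linalg.cholesky(Psi_r)     # column-covariance factor
    Y = np.empty((n, p, r), dtype=int)
    for i in range(n):
        Z = rng.standard_normal((p, r))
        Theta = M + Lp @ Z @ Lr.T      # matrix-variate normal draw
        Y[i] = rng.poisson(np.exp(Theta))
    return Y

p, r = 4, 3
M = np.zeros((p, r))                   # toy mean matrix of log-rates
Y = sample_mvpln(10, M, np.eye(p), np.eye(r))
print(Y.shape)                         # (10, 4, 3)
```

In a mixture setting, each component would carry its own mean matrix and pair of covariance matrices, with component memberships and latent log-rates handled by the MCMC-EM steps described in the abstract.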